Progress and error messages
Progress message
Sometimes the Agent may take too long to generate a response, for example due to the use of multiple tools or slow tool responses. In such cases, consider using the 'Progress message' and 'Progress timeout' parameters on the Miscellaneous screen to define a progress message that will be played to the user while the Agent is "thinking".
- 'Progress timeout' defines the timeout in milliseconds after which the progress message is played.
  - The default value is 2000 (2 seconds).
  - If set to 0 (zero), the message is played immediately.
- 'Progress message' defines the message that will be played.
  - You may specify a single message. For example:
    Let me think
  - You may specify multiple messages separated by commas, and the Agent will randomly choose one of them. For example:
    Let me think, Just a moment
  - To specify a single message that includes a comma, enclose it in quotes. For example:
    "umm, let me think"
  - To specify multiple messages where at least one of them includes commas, use JSON list syntax. For example:
    ["umm, let me think", "thinking"]
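A minimal sketch of how these message formats could be parsed and a candidate selected (illustrative only; the framework's actual parsing logic is not documented here, and the function name is hypothetical):

```python
import json
import random

def parse_progress_messages(value):
    """Parse a 'Progress message' parameter value into a list of candidates.

    Handles the four documented formats: a single message, comma-separated
    messages, a quoted single message containing commas, and a JSON list.
    """
    value = value.strip()
    if value.startswith("["):                       # JSON list syntax
        return json.loads(value)
    if value.startswith('"') and value.endswith('"'):
        return [value[1:-1]]                        # quoted message with commas
    return [part.strip() for part in value.split(",")]

def pick_progress_message(value):
    """Randomly choose one message, as the Agent does for multiple messages."""
    return random.choice(parse_progress_messages(value))
```

For instance, `parse_progress_messages('["umm, let me think", "thinking"]')` yields both candidates, and `pick_progress_message` selects one of them at random.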
For multi-agent topologies, 'Progress message' and 'Progress timeout' may be specified at the "top-level" agent or at any "child" agent. "Child" agents that lack custom configuration inherit the parameter values from the "top-level" agent.
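The 'Progress timeout' behavior described above, where a message is played only if the response is not ready in time, can be sketched as follows (an illustrative model, not the framework's implementation; the function and parameter names are hypothetical):

```python
import random
import threading

def answer_with_progress(generate, progress_messages, timeout_ms, play):
    """Run `generate` (the slow LLM/tool call) and play a random progress
    message if it has not returned within `timeout_ms` milliseconds."""
    timer = threading.Timer(timeout_ms / 1000.0,
                            lambda: play(random.choice(progress_messages)))
    timer.start()
    try:
        return generate()      # the possibly slow response generation
    finally:
        timer.cancel()         # response arrived; suppress a pending message
```

With `timeout_ms=0` the timer fires immediately, matching the documented behavior of playing the message right away.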
Error message
If the Live Hub AI Agents Framework fails to receive a valid response from the LLM or encounters some other unexpected error, it generates an "error response" to the user. The default value for this response is "Sorry, I didn't get it. Can you please say it again?"
You may change this value by configuring the 'Error message' parameter on the Miscellaneous tab of the Agent configuration screen. For example:
Sorry, can you please rephrase your last question?
You can also customize the response provided to the user when the conversation is closed due to the 'Max turns' configuration parameter. To do that, specify the max_turns_message advanced configuration parameter:
{
  "max_turns_message": "Sorry, I need to hang up."
}
Empty LLM responses
Some LLMs may produce empty responses due to various technical issues or intermittent processing errors. By default, AI Agents automatically retry upon an empty LLM response, ensuring that the conversation continues smoothly and the user experience is not disrupted. If you prefer AI Agents to immediately generate an error message upon an empty LLM response, set the empty_llm_response advanced configuration parameter to ignore.
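For example, the following advanced configuration (using the empty_llm_response parameter and ignore value named above, in the same style as the max_turns_message example) disables the automatic retry:

```json
{
  "empty_llm_response": "ignore"
}
```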